Long-lost page from Greek manuscript discovered in French art museum

Popular Science

This section from the Archimedes Palimpsest contains a mixture of ancient geometry and Byzantine prayers. The missing page still bears traces of geometric diagrams based on the work of the Greek mathematician Archimedes of Syracuse. The Archimedes Palimpsest is a Byzantine prayerbook written in 1229, but the artifact holds more than immediately meets the eye. The original writing on its pages was erased and written over, making it a palimpsest, a common practice during the medieval period for expensive writing materials made from animal skin, such as parchment.


Phase-Type Variational Autoencoders for Heavy-Tailed Data

Ziani, Abdelhakim, Horváth, András, Ballarini, Paolo

arXiv.org Machine Learning

Heavy-tailed distributions are ubiquitous in real-world data, where rare but extreme events dominate risk and variability. However, standard Variational Autoencoders (VAEs) employ simple decoder distributions (e.g., Gaussian) that fail to capture heavy-tailed behavior, while existing heavy-tail-aware extensions remain restricted to predefined parametric families whose tail behavior is fixed a priori. We propose the Phase-Type Variational Autoencoder (PH-VAE), whose decoder distribution is a latent-conditioned Phase-Type (PH) distribution defined as the absorption time of a continuous-time Markov chain (CTMC). This formulation composes multiple exponential time scales, yielding a flexible and analytically tractable decoder that adapts its tail behavior directly from the observed data. Experiments on synthetic and real-world benchmarks demonstrate that PH-VAE accurately recovers diverse heavy-tailed distributions, significantly outperforming Gaussian, Student-t, and extreme-value-based VAE decoders in modeling tail behavior and extreme quantiles. In multivariate settings, PH-VAE captures realistic cross-dimensional tail dependence through its shared latent representation. To our knowledge, this is the first work to integrate Phase-Type distributions into deep generative modeling, bridging applied probability and representation learning.
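As a rough illustration of the decoder family described above (my own sketch, not the authors' implementation, assuming NumPy and SciPy are available): a Phase-Type distribution is parameterized by an initial probability vector alpha over the transient states and a sub-generator matrix T of a CTMC with one absorbing state, with density f(t) = alpha exp(Tt) t0, where t0 = -T1 collects the exit rates. In a PH-VAE these parameters would be emitted by the decoder network from the latent code; here they are fixed toy values.

    import numpy as np
    from scipy.linalg import expm

    def phase_type_density(t, alpha, T):
        """Density of a Phase-Type distribution: f(t) = alpha @ expm(T*t) @ t0,
        where t0 = -T @ 1 is the vector of exit rates into the absorbing state."""
        t0 = -T @ np.ones(T.shape[0])
        return float(alpha @ expm(T * t) @ t0)

    def sample_phase_type(alpha, T, rng):
        """Simulate the underlying CTMC until absorption; return the absorption time."""
        n = T.shape[0]
        exit_rates = -T @ np.ones(n)
        state = rng.choice(n, p=alpha)
        time = 0.0
        while True:
            rate = -T[state, state]
            time += rng.exponential(1.0 / rate)
            probs = np.append(T[state].copy(), exit_rates[state])  # jump targets + absorption
            probs[state] = 0.0
            probs /= rate
            nxt = rng.choice(n + 1, p=probs)
            if nxt == n:          # absorbed
                return time
            state = nxt

    # toy 3-phase example: composing exponential time scales produces a heavier tail
    alpha = np.array([0.6, 0.3, 0.1])
    T = np.array([[-2.0, 1.0, 0.5],
                  [0.0, -1.0, 0.3],
                  [0.0, 0.0, -0.1]])
    rng = np.random.default_rng(0)
    print(phase_type_density(1.0, alpha, T), [sample_phase_type(alpha, T, rng) for _ in range(3)])

In this toy example the slowest phase (rate 0.1) dominates the tail; composing several such time scales is what lets the decoder stretch its tail behavior to match the data, as the abstract describes.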





1d8dc55c1f6cf124af840ce1d92d1896-Paper-Conference.pdf

Neural Information Processing Systems

As in the classical problem, weights are fixed by an adversary and elements appear in random order. In contrast to previous variants of predictions, our algorithm only has access to a much weaker piece of information: an additive gap c.
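For orientation, here is a sketch of the classical rule this setting extends (the well-known 1/e secretary strategy, shown only as the baseline; it makes no use of the additive gap c that the paper exploits):

    import math, random

    def classical_secretary(weights):
        """1/e rule: weights arrive in uniformly random order; reject the first
        n/e arrivals, then accept the first weight exceeding the best rejected one."""
        order = list(weights)
        random.shuffle(order)               # random arrival order
        cutoff = int(len(order) / math.e)
        best_seen = max(order[:cutoff], default=float("-inf"))
        for w in order[cutoff:]:
            if w > best_seen:
                return w                    # irrevocable acceptance
        return order[-1]                    # forced to take the last element

    print(classical_secretary([3.0, 1.0, 7.5, 2.2, 9.9, 0.4]))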


Adversarially Robust Multi-task Representation Learning

Neural Information Processing Systems

We study adversarially robust transfer learning, wherein, given labeled data on multiple (source) tasks, the goal is to train a model with small robust error on a previously unseen (target) task. In particular, we consider a multi-task representation learning (MTRL) setting, i.e., we assume that the source and target tasks admit a simple (linear) predictor on top of a shared representation (e.g., the final hidden layer of a deep neural network). In this general setting, we provide rates on the excess adversarial (transfer) risk for Lipschitz losses and smooth nonnegative losses. These rates show that learning a representation using adversarial training on diverse tasks helps protect against inference-time attacks in data-scarce environments. Additionally, we provide novel rates for the single-task setting.
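To make the setting concrete, here is a minimal sketch under simplifying assumptions of my own (a linear shared representation, squared loss, and l_inf PGD attacks with an illustrative budget; PyTorch assumed): adversarial training on several linear source tasks learns the shared map, and the target task then fits only a linear head on the frozen representation.

    import torch
    import torch.nn.functional as F

    d, k, n_tasks, n = 20, 5, 4, 200           # input dim, representation dim, source tasks, samples per task
    eps, step_size, pgd_steps = 0.1, 0.03, 5   # illustrative l_inf attack budget

    torch.manual_seed(0)
    B_true = torch.randn(d, k)                 # ground-truth shared representation
    Xs = [torch.randn(n, d) for _ in range(n_tasks)]
    ys = [X @ B_true @ torch.randn(k, 1) + 0.1 * torch.randn(n, 1) for X in Xs]

    B = torch.nn.Linear(d, k, bias=False)      # learned shared representation
    heads = [torch.nn.Linear(k, 1, bias=False) for _ in range(n_tasks)]
    opt = torch.optim.Adam(list(B.parameters()) + [p for h in heads for p in h.parameters()], lr=1e-2)

    def pgd_attack(X, y, model):
        """l_inf PGD perturbation of the inputs against the current model."""
        delta = torch.zeros_like(X, requires_grad=True)
        for _ in range(pgd_steps):
            loss = F.mse_loss(model(X + delta), y)
            grad, = torch.autograd.grad(loss, delta)
            with torch.no_grad():
                delta += step_size * grad.sign()
                delta.clamp_(-eps, eps)
        return delta.detach()

    # adversarial training of the shared representation across the source tasks
    for _ in range(300):
        opt.zero_grad()
        total = 0.0
        for X, y, h in zip(Xs, ys, heads):
            model = lambda inp: h(B(inp))
            delta = pgd_attack(X, y, model)
            total = total + F.mse_loss(model(X + delta), y)
        total.backward()
        opt.step()

    # target task: fit only a linear head on the frozen, robustly trained representation
    X_t = torch.randn(50, d)
    y_t = X_t @ B_true @ torch.randn(k, 1)
    with torch.no_grad():
        Z = B(X_t)
    w_t = torch.linalg.lstsq(Z, y_t).solution
    print("target clean MSE:", F.mse_loss(Z @ w_t, y_t).item())

The point mirrored here is the one the abstract makes: the robustly trained representation is reused, so the data-scarce target task only has to estimate k head coefficients rather than learn a robust model from scratch.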